Low-level Floating Point Marshalling Between Different Instruction Level Architectures

Author

  • Iliya Georgiev
Abstract

The paper suggests a method for marshalling floating point numbers in network computing based on bit-level conversion subroutines. Such subroutines are the building blocks of hybrid conversion protocols. The method provides the speed required by real-time and transaction applications. A library of conversion subroutines has been implemented that can be linked to different applications on different computers. The library is incorporated in commercial transaction packages.
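As a rough illustration of what a bit-level conversion subroutine can look like (the paper's own library is not reproduced here; the function names and the GCC/Clang byte-order macros below are assumptions), the following C sketch marshals an IEEE-754 single-precision value between a little-endian host and a big-endian wire representation:

/* Illustrative sketch only: a bit-level conversion subroutine of the kind
 * the paper describes, here for IEEE-754 single precision. */
#include <stdint.h>
#include <string.h>

/* Reverse the byte order of a 32-bit pattern. */
static uint32_t swap32(uint32_t x)
{
    return (x >> 24) | ((x >> 8) & 0x0000FF00u)
         | ((x << 8) & 0x00FF0000u) | (x << 24);
}

/* Marshal a host float into a big-endian ("wire") 32-bit pattern. */
uint32_t float_to_net(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret without undefined behaviour */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    bits = swap32(bits);              /* GCC/Clang predefined macros assumed */
#endif
    return bits;
}

/* Unmarshal a big-endian 32-bit pattern back into a host float. */
float net_to_float(uint32_t bits)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    bits = swap32(bits);
#endif
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

The same pattern extends to double precision; marshalling between genuinely different floating point formats additionally has to translate exponent bias and mantissa width at the bit level, not just byte order.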


Similar articles

Compiler Optimization for Superscalar Systems: Global Instruction Scheduling without Copies

Vol. 10, No. 1, 1998. Many of today's computer applications require computation power not easily achieved by computer architectures that provide little or no parallelism. A promising alternative is the parallel architecture, more specifically the instruction-level parallel (ILP) architecture, which increases the amount of computation performed during each machine cycle. ILP computers allow parallel computation of the lo...


More Instruction Level Parallelism Explains the Actual Efficiency of Compensated Algorithms

The compensated Horner algorithm and the Horner algorithm with double-double arithmetic improve the accuracy of polynomial evaluation in IEEE-754 floating point arithmetic. Both yield a polynomial evaluation as accurate as if it were computed with the classic Horner algorithm in twice the working precision. Both algorithms also share the same low-level computation of the floating point rounding ...
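For concreteness, a minimal sketch of compensated Horner evaluation (not the authors' code) can be built from the two standard error-free transformations, assuming IEEE-754 double precision and a correctly rounded fma():

/* A minimal sketch of the compensated Horner scheme, using
 * error-free transformations built on fma(). Link with -lm. */
#include <math.h>
#include <stddef.h>

/* TwoSum: s + e == a + b exactly, with s = fl(a + b). */
static void two_sum(double a, double b, double *s, double *e)
{
    *s = a + b;
    double z = *s - a;
    *e = (a - (*s - z)) + (b - z);
}

/* TwoProd via FMA: p + e == a * b exactly, with p = fl(a * b). */
static void two_prod(double a, double b, double *p, double *e)
{
    *p = a * b;
    *e = fma(a, b, -*p);
}

/* Evaluate p[0] + p[1]*x + ... + p[n]*x^n with a compensation term. */
double compensated_horner(const double *p, size_t n, double x)
{
    double s = p[n], c = 0.0;               /* c accumulates rounding errors */
    for (size_t i = n; i-- > 0; ) {
        double prod, prod_err, sum_err;
        two_prod(s, x, &prod, &prod_err);    /* s*x and its rounding error   */
        two_sum(prod, p[i], &s, &sum_err);   /* + coefficient and its error  */
        c = c * x + (prod_err + sum_err);    /* propagate errors by Horner   */
    }
    return s + c;                            /* compensated result           */
}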


Impact on Performance of Fused Multiply-Add Units in Aggressive VLIW Architectures

Loops are the main time-consuming part of programs based on floating point computations. The performance of the loops is limited either by recurrences in the computation or by the resources offered by the architecture. Several general-purpose superscalar microprocessors have been implemented with fused multiply-add floating-point units, which reduce the latency of the combined operation and the...
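The combined operation can be illustrated with C99's fma(), which computes a*b + c with a single rounding; this toy example (an illustration only, not taken from the paper) should be compiled with floating-point contraction disabled, e.g. -ffp-contract=off on GCC/Clang, so the plain expression is not itself fused:

/* Illustration of a fused multiply-add: fma(a, b, c) evaluates a*b + c
 * as one operation with a single rounding (C99 <math.h>, link with -lm). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.0 + 0x1p-27, b = 1.0 - 0x1p-27, c = -1.0;
    /* Exact value of a*b + c is -0x1p-54; the separate multiply rounds it away. */
    printf("separate : %a\n", a * b + c);     /* 0x0p+0 with contraction disabled */
    printf("fused    : %a\n", fma(a, b, c));  /* -0x1p-54: single rounding keeps it */
    return 0;
}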


Using LISATek for the Design of an ASIP Core including Floating Point Operations

Application-specific instruction set processors (ASIPs) have recently become more important for overcoming compute bottlenecks in digital signal processing systems with tight power constraints. In recent years, commercial tools such as the LISATek framework have emerged that allow ASIP architectures to be designed using their own description language. This shortens the design cycle dramatically compared to c...


High-Level Synthesis under Fixed-Point Accuracy Constraint

Implementing signal processing applications in embedded systems generally requires the use of fixed-point arithmetic. The main problem slowing down the hardware implementation flow is the lack of high-level development tools to target these architectures from an algorithmic specification language using floating-point data types. In this paper, a new method to automatically implement a floating-poi...
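As a hand-written counterpart to what such tools automate, the following C sketch replaces a floating-point expression with fixed-point arithmetic on 32-bit integers; the Q4.28 format and the example coefficients are assumptions for illustration, not taken from the paper:

/* Manual floating-point to fixed-point conversion of y = 0.75*x + 0.1. */
#include <stdint.h>
#include <stdio.h>

#define Q_FRAC 28                        /* Q4.28: 4 integer bits, 28 fraction bits */
#define TO_Q(x)   ((int32_t)((x) * (double)(1 << Q_FRAC)))
#define FROM_Q(q) ((double)(q) / (double)(1 << Q_FRAC))

/* Fixed-point multiply: widen to 64 bits, then rescale back to Q4.28. */
static int32_t q_mul(int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a * (int64_t)b) >> Q_FRAC);
}

int main(void)
{
    double x = 1.5;
    double y_float = 0.75 * x + 0.1;         /* floating-point reference      */

    int32_t xq = TO_Q(x);                    /* same computation on fixed-    */
    int32_t yq = q_mul(TO_Q(0.75), xq) + TO_Q(0.1);  /* point data            */

    printf("float: %f  fixed: %f\n", y_float, FROM_Q(yq));
    return 0;
}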



Journal:

Volume:   Issue:

Pages:   -

Publication date: 2006